
    Data-driven building energy efficiency prediction based on envelope heat losses using physics-informed neural networks

    The analytical prediction of building energy performance in residential buildings, based on the heat losses of their individual envelope components, is a challenging task. It is worth noting that this field is still in its infancy, with relatively limited research conducted in this specific area to date, especially when it comes to data-driven approaches. In this paper, we introduce a novel physics-informed neural network model for addressing this problem. Employing previously unexposed datasets that encompass general building information, audited characteristics, and heating energy consumption, we feed the deep learning model with the general building information, while the model's output consists of the structural components and several thermal properties that are, in fact, the basic elements of an energy performance certificate (EPC). On top of this neural network, a function based on physics equations calculates the energy consumption of the building from its heat losses and enhances the loss function of the deep learning model. This methodology is tested on a real case study of 256 buildings located in Riga, Latvia. Our investigation yields promising results in terms of prediction accuracy, paving the way for automated, data-driven energy efficiency prediction based on basic properties of the building, in contrast to the exhaustive, human-led energy efficiency audits that are the current status quo.
    Comment: 8 pages, 1 figure
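
    The key mechanism, a physics function that converts the network's predicted thermal properties into an energy consumption estimate and feeds the discrepancy with metered consumption back into the loss, can be sketched in PyTorch. The heat-loss form (a steady-state Q = sum of U * A * degree-hours), the architecture, and all identifiers below are assumptions, since the abstract does not specify the exact equations.

    ```python
    # Minimal sketch of the physics-informed setup described above. The
    # steady-state heat-loss form and all identifiers are assumptions; the
    # paper's exact equations and features may differ.
    import torch
    import torch.nn as nn

    class EnvelopePINN(nn.Module):
        def __init__(self, n_features: int, n_components: int):
            super().__init__()
            # maps general building information to per-component U-values
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, n_components),
            )

        def forward(self, x):
            # thermal transmittances are non-negative
            return torch.relu(self.net(x))

    def physics_energy(u_values, areas, degree_hours):
        # envelope heat losses: Q = sum_i U_i * A_i * heating degree-hours
        return (u_values * areas).sum(dim=1) * degree_hours

    def physics_loss(model, x, areas, degree_hours, metered_energy):
        # discrepancy between physics-derived and metered consumption; added
        # to any supervised loss on the predicted EPC elements
        q_pred = physics_energy(model(x), areas, degree_hours)
        return nn.functional.mse_loss(q_pred, metered_energy)
    ```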

    Transfer learning for day-ahead load forecasting: a case study on European national electricity demand time series

    Short-term load forecasting (STLF) is crucial for the daily operation of power grids. However, the non-linearity, non-stationarity, and randomness characterizing electricity demand time series render STLF a challenging task. Various forecasting approaches have been proposed for improving STLF, including neural network (NN) models that are trained using data from multiple electricity demand series which may not necessarily include the target series. In the present study, we investigate the performance of this special case of STLF, called transfer learning (TL), by considering a set of 27 time series that represent the national day-ahead electricity demand of indicative European countries. We employ a popular and easy-to-implement NN model and perform a clustering analysis to identify similar patterns among the series and assist TL. In this context, two different TL approaches, with and without the clustering step, are implemented and compared against each other, as well as against a typical NN training setup. Our results demonstrate that TL can outperform the conventional approach, especially when clustering techniques are considered.
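
    In outline, the clustering-assisted TL variant pools training data from countries whose load patterns resemble the target's. The sketch below is illustrative only: the clustering features (average daily profiles), the MLP model, and the day-ahead framing are assumptions, as the abstract does not fix these details.

    ```python
    # Minimal sketch of clustering-assisted transfer learning for STLF.
    # Features, model, and horizon handling are assumptions; the paper's
    # NN architecture and pipeline may differ.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    def fit_transfer_model(series: dict, target: str, n_clusters: int = 3):
        """series maps country -> array of shape (n_days, 24) of hourly demand."""
        names = list(series)
        # cluster countries by their average daily load profile
        profiles = np.stack([series[k].mean(axis=0) for k in names])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(profiles)
        target_label = labels[names.index(target)]
        pool = [k for k, c in zip(names, labels) if c == target_label]
        # pool day-ahead training pairs: today's profile -> tomorrow's profile
        X = np.concatenate([series[k][:-1] for k in pool])
        y = np.concatenate([series[k][1:] for k in pool])
        return MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
    ```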

    Targeted demand response for flexible energy communities using clustering techniques

    The present study proposes clustering techniques for designing demand response (DR) programs for commercial and residential prosumers. The goal is to alter the consumption behavior of the prosumers within a distributed energy community in Italy. This aggregation aims to: a) minimize the reverse power flow at the primary substation, occurring when generation from solar panels in the local grid exceeds consumption, and b) shift the system-wide peak demand, which typically occurs during late afternoon. In the clustering stage, we consider daily prosumer load profiles and divide them across the extracted clusters. Three popular machine learning algorithms are employed, namely k-means, k-medoids, and agglomerative clustering. We evaluate the methods using multiple metrics, including a novel metric proposed within this study, the peak performance score (PPS). The k-means algorithm with a dynamic time warping distance and 14 clusters exhibits the highest performance, with a PPS of 0.689. Subsequently, we analyze each extracted cluster with respect to load shape, entropy, and load types. These characteristics are used to distinguish the clusters that have the potential to serve the optimization objectives, by matching them to proper DR schemes including time-of-use, critical peak pricing, and real-time pricing. Our results confirm the effectiveness of the proposed clustering algorithm in generating meaningful flexibility clusters, while the derived DR pricing policy encourages consumption during off-peak hours. The developed methodology is robust to low availability and quality of training data and can be used by aggregator companies for segmenting energy communities and developing personalized DR policies.
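
    The best-performing configuration, k-means with a dynamic time warping distance over daily load profiles, can be sketched with the tslearn library. The library choice and the z-normalization step are assumptions; the PPS metric and the DR-scheme matching are not reproduced here.

    ```python
    # Minimal sketch of the clustering stage: DTW k-means over daily prosumer
    # load profiles. Library choice and preprocessing are assumptions; the
    # paper's PPS evaluation metric is not reproduced.
    import numpy as np
    from tslearn.clustering import TimeSeriesKMeans
    from tslearn.preprocessing import TimeSeriesScalerMeanVariance

    def cluster_load_profiles(profiles: np.ndarray, n_clusters: int = 14):
        """profiles: array of shape (n_prosumer_days, n_readings_per_day)."""
        # z-normalize each profile so clusters capture shape, not magnitude
        X = TimeSeriesScalerMeanVariance().fit_transform(profiles)
        km = TimeSeriesKMeans(n_clusters=n_clusters, metric="dtw", random_state=0)
        labels = km.fit_predict(X)
        return labels, km.cluster_centers_
    ```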

    A comparative assessment of deep learning models for day-ahead load forecasting: Investigating key accuracy drivers

    Short-term load forecasting (STLF) is vital for the effective and economic operation of power grids and energy markets. However, the non-linearity and non-stationarity of electricity demand, as well as its dependency on various external factors, render STLF a challenging task. To that end, several deep learning models have been proposed in the literature for STLF, reporting promising results. In order to evaluate the accuracy of said models in day-ahead forecasting settings, in this paper we focus on the national net aggregated STLF of Portugal and conduct a comparative study considering a set of indicative, well-established deep autoregressive models, namely multi-layer perceptrons (MLP), long short-term memory networks (LSTM), neural basis expansion analysis (N-BEATS), temporal convolutional networks (TCN), and temporal fusion transformers (TFT). Moreover, we identify factors that significantly affect the demand and investigate their impact on the accuracy of each model. Our results suggest that N-BEATS consistently outperforms the rest of the examined models. MLP follows, providing further evidence towards the use of feed-forward networks over relatively more sophisticated architectures. Finally, certain calendar and weather features, like the hour of the day and the temperature, are identified as key accuracy drivers, providing insights regarding the forecasting approach that should be used per case.
    Comment: Keywords: Short-Term Load Forecasting, Deep Learning, Ensemble, N-BEATS, Temporal Convolution, Forecasting Accuracy
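
    A day-ahead comparison of this kind can be reproduced in outline with an off-the-shelf forecasting library. The sketch below assumes the darts library (the abstract does not name its tooling), covers two of the five examined architectures, and omits the external calendar/weather features and any hyperparameter tuning.

    ```python
    # Minimal sketch of a day-ahead model comparison, assuming the darts
    # library and an hourly demand DataFrame; the paper's exact models,
    # features, and tuning are not reproduced here.
    import pandas as pd
    from darts import TimeSeries
    from darts.metrics import mape
    from darts.models import NBEATSModel, TCNModel

    def compare_day_ahead(df: pd.DataFrame):
        """df: hourly national net load with a DatetimeIndex and a 'load' column."""
        series = TimeSeries.from_dataframe(df, value_cols="load")
        train, test = series[:-24], series[-24:]  # hold out the final day
        models = {
            "N-BEATS": NBEATSModel(input_chunk_length=168, output_chunk_length=24),
            "TCN": TCNModel(input_chunk_length=168, output_chunk_length=24),
        }
        for name, model in models.items():
            model.fit(train)
            forecast = model.predict(n=24)  # day-ahead forecast
            print(f"{name}: MAPE = {mape(test, forecast):.2f}%")
    ```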

    Assessing MITRE ATT&CK Risk Using a Cyber-Security Culture Framework

    The MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) Framework provides a rich and actionable repository of adversarial tactics, techniques, and procedures. Its innovative approach has been broadly welcomed by both vendors and enterprise customers in the industry. Its usage extends from adversary emulation, red teaming, behavioral analytics development to a defensive gap and SOC (Security Operations Center) maturity assessment. While extensive research has been done on analyzing specific attacks or specific organizational culture and human behavior factors leading to such attacks, a holistic view on the association of both is currently missing. In this paper, we present our research results on associating a comprehensive set of organizational and individual culture factors (as described on our developed cyber-security culture framework) with security vulnerabilities mapped to specific adversary behavior and patterns utilizing the MITRE ATT&CK framework. Thus, exploiting MITRE ATT&CK’s possibilities towards a scientific direction that has not yet been explored: security assessment and defensive design, a step prior to its current application domain. The suggested cyber-security culture framework was originally designed to aim at critical infrastructures and, more specifically, the energy sector. Organizations of these domains exhibit a co-existence and strong interaction of the IT (Information Technology) and OT (Operational Technology) networks. As a result, we emphasize our scientific effort on the hybrid MITRE ATT&CK for Enterprise and ICS (Industrial Control Systems) model as a broader and more holistic approach. The results of our research can be utilized in an extensive set of applications, including the efficient organization of security procedures as well as enhancing security readiness evaluation results by providing more insights into imminent threats and security risks

    Towards a Framework for B2B Integration Readiness Assessment and Guided Support of the SMEs

    In today's world, companies operate in a global business environment. Most enterprises, and especially SMEs, lack the necessary business culture, technical and non-technical infrastructure, and economic flexibility to efficiently adjust to the environment of a B2B integration framework. This paper proposes an Enterprise Integration Assessment Framework (EIAF) and its supporting software system, which aims to aid enterprises in adopting a multi-enterprise (B2B) integration approach by evaluating their situational status and by estimating the expected integration impact based on the evaluation results.

    A State-of-the-Art Analysis of the Current Public Data Landscape from a Functional, Semantic and Technical Perspective

    Open Government Data initiatives, and particularly Open Government Data portals, have proliferated since the late 2000s. A comprehensive analysis of the capabilities and potential of these initiatives is currently missing from the recent research literature. In order to address this gap, the paper at hand analyzes the landscape of Open Government Data in the European Union from a functional, semantic, and technical perspective. Our research focused on the collection and categorization of an indicative number of public data sources for each of the 27 European Union member states, investigating their services and characteristics. By modeling and classifying the data sources according to their key attributes, we were able to proceed with their statistical analysis and assessment in terms of content, licensing, multilingual support, acquisition, ease of access, provision, and data format. Our results portray the current quality of Public Sector Information infrastructures and highlight what still needs to be done in order to make public data truly open and readily available for researchers, citizens, companies, and innovation in general.

    Paving the Way for Future Research in ICT for Governance and Policy Modelling

    It took many years to persuade governments to change their attitude towards collaborative, evidence-based governance with strong ICT enablement: a decade that included a worldwide economic crisis and radical changes in the socioeconomic landscape imposed not only by wars but also by the rise and development of countries with huge manpower and natural resources. Public unrest, very low turnout in democratic activities, and a growing lack of trust in governments and their policies currently characterize societies and countries across the world. In parallel, the world has become increasingly interconnected, complex, and fast-evolving, making the effects of individual behaviour and of policy choices much less predictable. We are increasingly dealing with highly improbable events and “wicked problems”, often characterized as Black Swans since they appear very rarely, with extreme impact and very limited predictability, at least with the tools deployed. Moreover, during the recent financial crisis we experienced the failure of governments to predict or drive even the more obvious and important societal goals: public sector income, unemployment, growth, and standard of living are more and more becoming “wicked problems” themselves, rather than deterministically sought-for targets. The paradox is that, at the same time, the amount of data available to governments for making sense of the socio-economic environment has increased exponentially, whether provided through sensors and the Internet of Things or through crowdsourced citizens' ideas and criticism posted on social media. Within this framework, the European Commission decided to launch the CROSSROAD project within the FP7 programme, aiming at building a consensus-driven Research Roadmap to consolidate and advance research in a new, yet highly fragmented, domain and to provide strategic directions for the future of research in ICT for Governance and Policy Modelling. The main goal of the CROSSROAD project has been to drive the identification of emerging technologies, new governance models, and novel application scenarios in the field of governance and policy modelling, leading to the structuring of a beyond-the-state-of-the-art research agenda, fully embraced by the research and practice communities, as comprehensively presented in this book. This book is the result of the collaborative effort of several internationally renowned policy scientists, complex systems theorists, governance researchers, economists, and management science and ICT experts, under the guidance of the CROSSROAD team. Seville, Athens, Brussels, Lausanne, and Samos Island have been welcoming venues for this group, offering opportunities for highly innovative and productive brainstorming. But, clearly, the online deliberation toolset deployed by CROSSROAD was the key differentiating factor, engaging several hundred researchers, public sector decision-makers, industry representatives, and citizens. May we all remember this collective experience in the times to come, a little proud that we tried the difficult road: not just to “do the things right”, but to make an attempt towards “doing the right things” for ICT-enabled Governance and evidence-based decision making.
    JRC.J.3 - Information Society